The success of CNNs in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting the original accuracy. However, magnitude-based pruning of weights removes a significant number of parameters from the fully connected layers but may not adequately reduce computation costs in the convolutional layers, due to the irregular sparsity of the pruned networks. We present an acceleration method for CNNs in which we prune filters that are identified as having a small effect on the output accuracy. By removing whole filters together with their connecting feature maps, computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns; hence it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and for ResNet-110 by up to 38% on CIFAR10, while regaining close to the original accuracy by retraining the networks.
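The abstract does not specify the pruning criterion, so the sketch below assumes an L1-norm ranking of filters (one common smallness measure). It illustrates the key structural point: dropping whole output filters also removes the matching input channels of the next layer, leaving dense tensors throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive conv layers: weights have shape
# (out_channels, in_channels, kH, kW).
conv1 = rng.standard_normal((64, 3, 3, 3))
conv2 = rng.standard_normal((128, 64, 3, 3))

def prune_filters(w_cur, w_next, keep_ratio=0.5):
    """Rank filters of the current layer by their L1 norm and drop the
    smallest ones, together with the matching input channels of the next
    layer (their feature maps disappear). The result stays dense."""
    n_keep = int(round(w_cur.shape[0] * keep_ratio))
    l1 = np.abs(w_cur).reshape(w_cur.shape[0], -1).sum(axis=1)
    keep = np.sort(np.argsort(l1)[-n_keep:])  # indices of surviving filters
    return w_cur[keep], w_next[:, keep], keep

p1, p2, kept = prune_filters(conv1, conv2, keep_ratio=0.5)
print(p1.shape, p2.shape)  # (32, 3, 3, 3) (128, 32, 3, 3)
```

Because the pruned weights are still ordinary dense arrays, inference can proceed with standard BLAS matrix multiplications, exactly the property the abstract highlights.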
In speech recognition, it is essential to model the phonetic content of the input signal while discarding irrelevant factors such as speaker variations and noise, which is challenging in low-resource settings. Self-supervised pre-training has been proposed as a way to improve both supervised and unsupervised speech recognition, including frame-level feature representations and Acoustic Word Embeddings (AWE) for variable-length segments. However, self-supervised models alone cannot learn perfect separation of the linguistic content as they are trained to optimize indirect objectives. In this work, we experiment with different pre-trained self-supervised features as input to AWE models and show that they work best within a supervised framework. Models trained on English can be transferred to other languages with no adaptation and outperform self-supervised models trained solely on the target languages.
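To make the pipeline concrete: the paper feeds pre-trained self-supervised frame-level features into trained AWE models. The sketch below is not the paper's model; it shows only the simplest non-learned baseline, mean-pooling a variable-length feature sequence into a fixed-size embedding that can be compared by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed_segment(frame_feats):
    """Collapse a variable-length (T, D) sequence of frame-level
    self-supervised features into a fixed-size acoustic word embedding
    by mean pooling, then L2-normalize for cosine comparison."""
    e = frame_feats.mean(axis=0)
    return e / np.linalg.norm(e)

# Two spoken-word segments of different lengths, same feature dimension
# (stand-ins for features from a pre-trained self-supervised encoder).
word_a = rng.standard_normal((37, 768))  # 37 frames
word_b = rng.standard_normal((52, 768))  # 52 frames

sim = float(embed_segment(word_a) @ embed_segment(word_b))
print(round(sim, 3))  # cosine similarity in [-1, 1]
```

Fixed-size embeddings of this kind allow variable-length segments to be compared with a single dot product, which is what makes AWEs useful for word discrimination tasks.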
3D human whole-body pose estimation aims to localize precise 3D keypoints on the entire human body, including the face, hands, body, and feet. Due to the lack of a large-scale, fully annotated 3D whole-body dataset, a common approach has been to train several deep networks separately on datasets dedicated to specific body parts and to combine them during inference. This approach suffers from complex training and inference pipelines because of the different biases in each dataset used, and it lacks a common benchmark, which makes it difficult to compare different methods. To address these issues, we introduce Human3.6M 3D WholeBody (H3WB), which provides whole-body annotations for the Human3.6M dataset using the COCO Wholebody layout. H3WB is a large-scale dataset with 133 whole-body keypoint annotations on 100K images, made possible by our new multi-view pipeline. Along with H3WB, we propose three tasks: i) 3D whole-body pose lifting from a complete 2D whole-body pose, ii) 3D whole-body pose lifting from an incomplete 2D whole-body pose, and iii) 3D whole-body pose estimation from a single RGB image. We also report several baselines from popular methods for these tasks. The dataset is publicly available at \url{https://github.com/wholebody3d/wholebody3d}.
We propose a simple yet strong approach for unsupervised object segmentation in videos. We introduce an objective function whose minimum represents the mask of the main salient object over the input sequence. It relies only on independent image features and optical flow, which can be obtained using off-the-shelf self-supervised methods. It scales with the length of the sequence, requires no superpixels or sparsification, and generalizes to different datasets without any specific training. This objective function can in fact be derived from a form of spectral clustering applied to the entire video. Our method achieves on-par performance on standard benchmarks (DAVIS2016, SegTrack-v2, FBMS59) while being conceptually and practically much simpler. Code is available at https://ponimatkin.github.io/ssl-vos.
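The spectral-clustering connection can be illustrated on a toy problem. The sketch below is not the paper's objective; it assumes a generic graph built from per-pixel descriptors (stand-ins for the self-supervised features and optical flow) and thresholds the Fiedler vector of the normalized Laplacian to obtain a foreground mask.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-pixel descriptors for a tiny 8x8 frame: an object region with
# distinct features versus background.
feats = rng.standard_normal((64, 6)) * 0.1
feats[:20] += 2.0  # first 20 "pixels" belong to the salient object

# Gaussian affinity matrix and symmetric normalized Laplacian.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
Dm = 1.0 / np.sqrt(W.sum(1))
L = np.eye(64) - (Dm[:, None] * W * Dm[None, :])

# The second-smallest eigenvector (Fiedler vector) bipartitions the
# graph; thresholding it at zero yields a binary foreground mask.
vals, vecs = np.linalg.eigh(L)
mask = vecs[:, 1] > 0
if mask[:20].mean() < 0.5:  # the sign of an eigenvector is arbitrary
    mask = ~mask
print(mask[:20].sum(), mask[20:].sum())
```

On this well-separated toy graph the two clusters are recovered exactly; the paper's contribution is showing that a comparable objective works over a whole video without any training.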
Adversarial machine learning is an emerging field that demonstrates the vulnerability of deep learning models. Exploring attack methods that challenge state-of-the-art artificial intelligence (A.I.) models is an area of critical concern, and the reliability and robustness of such models is a major issue given the growing number of effective adversarial attack methods. Classification tasks are a primary area of vulnerability to adversarial attacks. Most attack strategies are developed for color or gray-scale images; consequently, adversarial attacks on binary image recognition systems have not been sufficiently studied. Binary images are simple signals with a single channel and only two possible pixel values. Compared to color and gray-scale images, this simplicity offers a significant advantage: computational efficiency. Moreover, most optical character recognition systems (O.C.R.s), such as handwritten character recognition, license plate recognition, and bank check recognition systems, use binary images or binarization in their processing steps. In this paper, we propose a simple yet efficient attack method, an efficient combinatorial black-box adversarial attack, on binary image classifiers. We validate the efficiency of the attack technique on two different datasets and three classification networks to demonstrate its performance. Furthermore, we compare the proposed method with state-of-the-art methods in terms of advantages, disadvantages, and applicability.
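The abstract does not describe the attack's search procedure, so the sketch below is only an illustrative combinatorial black-box baseline, not the paper's algorithm: greedily flip the binary pixel that most reduces the score of an opaque classifier, using query access alone.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in black-box binary-image classifier: returns a score in (0, 1).
# Any opaque scoring function would do; here it is a fixed linear model.
w = rng.standard_normal(64)
def score(img):  # img: flattened 8x8 binary image
    return 1.0 / (1.0 + np.exp(-(w @ img)))

def greedy_flip_attack(img, score_fn, budget=8):
    """Greedily flip the single pixel that most reduces the classifier's
    score, up to `budget` flips. Binary inputs make the perturbation
    space discrete, so the search is purely combinatorial."""
    adv = img.copy()
    for _ in range(budget):
        base = score_fn(adv)
        gains = []
        for i in range(adv.size):
            trial = adv.copy()
            trial[i] = 1 - trial[i]  # flip one binary pixel
            gains.append(base - score_fn(trial))
        best = int(np.argmax(gains))
        if gains[best] <= 0:  # no flip helps any more
            break
        adv[best] = 1 - adv[best]
    return adv

x = (rng.random(64) > 0.5).astype(float)
x_adv = greedy_flip_attack(x, score, budget=8)
print(score(x), "->", score(x_adv))  # score never increases
```

Note the contrast with gradient-based attacks on continuous images: with binary pixels there is no meaningful epsilon-ball, so the attack budget is naturally measured in flipped pixels.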
Bilinear dynamical systems are ubiquitous in many different domains and can also be used to approximate more general control-affine systems. This motivates the problem of learning bilinear systems from a single trajectory of the system's states and inputs. Under a mild marginal mean-square stability assumption, we identify how much data is needed to estimate the unknown bilinear system up to a desired accuracy with high probability. Our sample complexity and statistical error rates are optimal in terms of the trajectory length, the dimension of the system, and the input size. Our proof technique relies on an application of martingale small-ball conditions. This enables us to correctly capture the properties of the problem; in particular, our error rates do not deteriorate with increasing instability. Finally, we show that numerical experiments align well with our theoretical results.
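The estimation problem can be made concrete with a small simulation. The sketch below is an assumption about the setup (the abstract does not give the estimator): a bilinear system x_{t+1} = A x_t + B u_t + Σ_i u_{t,i} N_i x_t is linear in its unknown matrices once the regressor is lifted to [x_t, u_t, u_t ⊗ x_t], so a single least-squares solve over one trajectory recovers them.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, T = 3, 2, 2000  # state dim, input dim, trajectory length

# Ground-truth stable bilinear system (small spectral radius by scaling).
A = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
N = 0.1 * rng.standard_normal((p, n, n))  # N[i] multiplies u_t[i] * x_t

# Simulate one trajectory: x_{t+1} = A x + B u + sum_i u_i N_i x + noise.
X = np.zeros((T + 1, n))
U = rng.standard_normal((T, p))
for t in range(T):
    X[t + 1] = (A @ X[t] + B @ U[t]
                + np.einsum('i,ijk,k->j', U[t], N, X[t])
                + 0.01 * rng.standard_normal(n))

# Lifted regressor z_t = [x_t, u_t, u_t (outer) x_t]; one least-squares
# solve recovers all the system matrices from the single trajectory.
Z = np.hstack([X[:-1], U, (U[:, :, None] * X[:-1, None, :]).reshape(T, -1)])
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta[:n].T, Theta[n:n + p].T
print(np.linalg.norm(A_hat - A), np.linalg.norm(B_hat - B))
```

The paper's contribution is quantifying how long this trajectory must be for a given accuracy, and showing via martingale small-ball arguments that the rate does not degrade as the system becomes less stable.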
In this paper, we present the Women in Computer Vision Workshop - WiCV 2022, organized alongside the hybrid CVPR 2022 in New Orleans, Louisiana. It provides a voice to a minority (female) group in the computer vision community and focuses on increasing the visibility of these researchers in both academia and industry. WiCV believes that such an event can play an important role in reducing the gender imbalance in the field of computer vision. WiCV is organized each year to a) provide opportunities for collaboration between researchers from a minority group, b) mentor female junior researchers, c) offer financial support to presenters to overcome monetary burdens, and d) provide a large selection of role models who can serve as examples for younger researchers at the beginning of their careers. In this paper, we present a report on the workshop program, trends over the past years, and a summary of statistics regarding presenters, attendees, and sponsorship for the WiCV 2022 workshop.
A common sales strategy involves having account executives (AEs) actively reach out and contact potential customers. However, not all contact attempts have a positive effect: some do not change the customer's decision, while others may even interfere with the desired outcome. In this work, we propose using causal inference to estimate the effect of contacting each potential customer and to design the contact policy accordingly. We demonstrate this approach on the online jewelry marketplace worthy.com. We studied Worthy's business process to identify the relevant decisions and outcomes and formalized assumptions about how they are made. Using causal tools, we selected a decision point where improving AE contact activities appeared promising. We then developed a personalized policy that recommends contacting only those customers for whom contact is beneficial. Finally, we validated the results in an A/B test over a 3-month period, which yielded a 22% increase in the item delivery rate of the target population (p-value = 0.026). The policy is now in continuous use.
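The abstract does not state which effect estimator was used, so the sketch below is a generic illustration on synthetic data: a T-learner fits one outcome model per arm (contacted vs. not), estimates the per-customer uplift as the difference of predictions, and recommends contact only when that uplift is positive.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic history: one customer feature, a randomized contact decision
# (treatment), and an outcome whose uplift depends on the feature
# (contact helps only customers with x > 0, and hurts the rest).
n = 5000
x = rng.standard_normal(n)
t = rng.integers(0, 2, n)  # 1 = contacted by an AE
y = 0.3 * x + t * np.where(x > 0, 0.5, -0.2) + 0.1 * rng.standard_normal(n)

def fit_linear(xs, ys):
    """Least-squares line y = a*x + b."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# T-learner: one outcome model per arm; uplift = difference of predictions.
c1 = fit_linear(x[t == 1], y[t == 1])
c0 = fit_linear(x[t == 0], y[t == 0])
def contact_policy(x_new):
    uplift = (c1[0] * x_new + c1[1]) - (c0[0] * x_new + c0[1])
    return uplift > 0  # contact only when the estimated effect is positive

print(contact_policy(np.array([1.5])), contact_policy(np.array([-1.5])))
```

As in the paper's deployment, a policy learned this way should still be validated against the status quo in a randomized A/B test before being adopted.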
Data science has the potential to improve business across a variety of verticals. While the lion's share of data science projects use a predictive approach, these predictions must ultimately become decisions. However, this two-step approach is not only sub-optimal: it may even degrade performance and cause projects to fail. The alternative is to follow a prescriptive framing, in which actions are "first-class citizens", so that the model produces a policy that prescribes an action to take rather than a prediction of an outcome. In this paper, we explain why the prescriptive approach is important and provide a step-by-step methodology: the Prescriptive Canvas. The latter aims to improve framing and communication among project stakeholders, including project and data science managers, toward achieving a successful business impact.
Standard federated optimization methods successfully apply to stochastic problems with single-level structure. However, many contemporary ML problems - including adversarial robustness, hyperparameter tuning, and actor-critic - fall under nested bilevel programming, which subsumes minimax and compositional optimization. In this work, we propose FedNest: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedNest in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedNest introduces multiple innovations, including federated hypergradient computation and variance reduction, to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter & hyper-representation learning and minimax optimization that demonstrate the benefits of our method in practice. Code is available at https://github.com/ucr-optml/fednest.
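The alternating inner/outer structure can be shown on a toy bilevel problem. The sketch below strips away everything federated (single client, no heterogeneity, no variance reduction) and keeps only the alternation: a few inner gradient steps, then one outer step using the hypergradient.

```python
# Toy bilevel problem:
#   outer: min_lam  0.5 * (w*(lam) - 3)^2
#   inner: w*(lam) = argmin_w 0.5 * (w - lam)^2   (so w*(lam) = lam)
# Hypergradient of the outer loss: (w - 3) * dw*/dlam, with dw*/dlam = 1.
lam, w = 0.0, 0.0
eta_in, eta_out = 0.5, 0.2
for _ in range(200):
    for _ in range(5):              # inner loop: pull w toward lam
        w -= eta_in * (w - lam)
    lam -= eta_out * (w - 3.0)      # outer hypergradient step

print(round(lam, 3), round(w, 3))  # both converge to 3.0
```

In the federated setting, the inner steps run across clients on heterogeneous data and the hypergradient itself must be computed federatively, which is where the paper's technical contributions lie.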